Hello everyone and welcome back to the computer vision lecture series. In this lecture we are going to talk about corner detection. We will start from feature points and then move on to see how a corner is a very good example of a feature point. So let us go ahead.
We are going to discuss the properties of good interest points and Harris corners. We will also look into the mathematical formulation of how the Harris corner detector works, and we are going to look at the invariance and covariance properties of Harris corners. This is important because once you know these properties of Harris corners, you will be in a better position to apply Harris corner detection in a given application.
So we started with filtering and moved on to edges, and corners are the natural next step in our flow from low-level to high-level computer vision tasks. Corners are more distinctive features than edges: an edge changes in one direction but remains constant along the other, whereas a corner changes in both directions and therefore has fewer degrees of freedom in its location. That is the distinctive characteristic of corners; they are more unique compared to edges.
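As a quick preview of where we are heading, the sketch below computes a Harris corner response map and marks the strongest responses in red. This is only an illustrative snippet of the detector we will formalize later in the lecture; the availability of OpenCV and NumPy and the filename chessboard.png are my own assumptions.

```python
import cv2
import numpy as np

# "chessboard.png" is a placeholder filename used only for illustration.
img = cv2.imread("chessboard.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Harris response map: arguments are the neighbourhood size, the Sobel
# aperture, and the empirical constant k (commonly 0.04-0.06).
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# Keep only strong responses (relative to the maximum) and mark them
# in red on the original image.
img[response > 0.01 * response.max()] = [0, 0, 255]
cv2.imwrite("harris_corners.png", img)
```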
A corner is one kind of feature point. Feature points are also called interest points or key points; all of these are considered local features.
So what are the main components of working with local features? We should find a distinctive set of key points. The first step is detection of these features, and the next is description: how to describe them in a compact vector form. So once you detect the features, you need to know how to represent them as a vector or a matrix inside your machine or your algorithm.
This is one example given here, and then at the end you match those features across images to find correspondences.
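To make these three steps concrete, here is a minimal sketch of the detect-describe-match pipeline, assuming OpenCV is available. ORB is used purely as a stand-in detector and descriptor, since we have not introduced any particular descriptor yet, and the two filenames are placeholders for a pair of overlapping views.

```python
import cv2

# Placeholder filenames; any pair of overlapping views of a scene would do.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Steps 1 and 2: detect key points and describe each one as a compact vector.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Step 3: match descriptors across the two images to obtain correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 30 best correspondences for inspection.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)
cv2.imwrite("matches.png", vis)
```

Each match pairs a key point in the first image with its most similar descriptor in the second image, which is exactly the correspondence step described above.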
What could be the applications of this feature point detection? Think about image alignment: panorama stitching, for example, is a direct application of feature points, because in a panorama the successive images change only slightly, and you want to match the many features that are common to these slightly changing images and align them. That is where feature points and feature descriptors come into play. In a panorama you also need some blending techniques because the brightness changes between images, so your feature points should be robust to noise and specifically to photometric transformations such as the illumination changes in this panorama case. Your feature point detector should be robust to these effects so that you are able to stitch well. For 3D reconstruction you obviously need multiple views of the same scene and you need to match points across those views, so 3D reconstruction is another direct application of feature points.
In motion tracking, for example when robots navigate in the real world, the camera moves along with the robot and maps the scene continuously. In order to navigate you need to track, or keep attention on, specific objects in your neighborhood, and because you are using a video camera you need to detect these feature points to keep track of them across successive frames and to reconstruct a 3D virtual world, which makes navigation easier. The same goes for indexing and database retrieval.
For example, in Google Image Search, let's say you want to look for a red flower or something like that; redness is then a feature you are looking for, and when you put in the search, that property drives the image retrieval. Another example is object recognition, where you can have a vector or a matrix holding a collection of gradients, or a histogram of gradients, and then use these features to recognize objects across different images.
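To give a flavour of what a histogram of gradients looks like as a feature vector, here is a small sketch in NumPy. It is my own illustration rather than the exact formulation of any standard descriptor: it simply bins the gradient orientations of a patch, weighting each pixel's vote by its gradient magnitude.

```python
import numpy as np

def gradient_orientation_histogram(patch, n_bins=8):
    """Describe a grayscale patch by a histogram of gradient orientations,
    each pixel's vote weighted by its gradient magnitude."""
    patch = patch.astype(np.float64)
    gy, gx = np.gradient(patch)        # finite-difference gradients
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)   # angles in [-pi, pi]

    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    # Normalize so patches of different contrast are comparable.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Example: a synthetic patch with a diagonal intensity ramp.
patch = np.add.outer(np.arange(16), np.arange(16))
print(gradient_orientation_histogram(patch))
```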
This is an example of feature point matching across different views. You find that these two stop signs are similar in some sense: they are both red in color with STOP written in white, and when you want to match, or find an equivalent of, the feature representing this circular patch, you are able to find it here even though it is a bit rotated and at a somewhat different resolution. In this way, feature point detection helps in finding similar features across different but similar-looking images. A basic template matching of this feature will not give a good result, because you need robust feature alignment across different views. So, as we discussed earlier when we were talking about correlation, one of